The application of operations research in the mining industry dates back to the beginning of operations research as an applied science. In “Mining Coal or Finding Terrorists: The Expanding Search Paradigm,” S. Alpern and T. Lidbetter examine the decision problem faced by a coal mining company that must choose where to mine in a network of coal seams with known coal densities, in order to maximize the rate of coal extraction. To analyze this problem, the authors define “expanding search,” a search model based on the assumption that the time required to move mining equipment to a nearby site is negligible compared with the time spent digging. They solve the decision problem for a certain class of networks and then consider the analogous problem in which the distribution of coal-seam densities is unknown. The same principles apply to a search by a team of searchers for a hidden object, terrorist, or “hider” in a network.

To thrive, many firms need healthy long-term relationships with critical suppliers. Can a firm prevent its suppliers from becoming too “comfortable” to keep improving? In “Dynamic Business Share Allocation in a Supply Chain with Competing Suppliers,” H. Li, H. Zhang, and C. H. Fine examine the use of performance-based business share allocations as incentives for driving continuous supplier effort in a dual-sourcing supply chain. They propose a two-agent repeated moral hazard model and characterize the optimal contract through a fixed-point analysis. Optimal dynamic contracts can drive suppliers into either a stable status quo or ongoing fierce competition over the long run, illustrating, respectively, the “carrot” and the “stick” of supplier incentives. The results and discussion underscore the practical value of quantitative supplier assessment systems.

The two-echelon vehicle routing problem (2E-VRP) represents the core problem of all multiechelon distribution networks. In the 2E-VRP, deliveries from a central depot to customers are made in two levels. The first level consists of vehicle routes between the central depot and a subset of intermediate depots, or satellites; the second consists of vehicle routes from a single satellite to a group of customers. The objective is to service each customer exactly once with the second-level routes while minimizing the total routing cost. In “An Exact Algorithm for the Two-Echelon Capacitated Vehicle Routing Problem,” R. Baldacci, A. Mingozzi, R. Roberti, and R. Wolfler Calvo develop an exact approach for solving the 2E-VRP. Their method outperforms the best-known exact methods, both in the quality of the lower bounds achieved and in the number and size of the instances solved to optimality. Medium-size real-world instances involving 100 customers and six satellites were solved to optimality.

Capacitated arc routing problems involve finding a set of lowest-cost routes for a fleet of capacitated vehicles to provide service to a subset of links in a given network. In “An Exact Algorithm for the Capacitated Arc Routing Problem with Deadheading Demand,” E. Bartolini, J.-F. Cordeau, and G. Laporte consider a generalization called the capacitated arc routing problem with deadheading demand (CARPDD), in which a resource such as time or energy is consumed each time a link is traversed, but the consumption level depends on the activity performed. They describe practical applications of the CARPDD and propose a set partitioning (SP) formulation strengthened by a new class of valid inequalities. They develop an exact algorithm based on the SP formulation that solves both the CARPDD and the classical capacitated arc routing problem (CARP). The algorithm is tested computationally on a large set of CARPDD and CARP instances.
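The set-partitioning idea at the heart of such formulations can be made concrete with a minimal sketch. In the toy Python example below, all data (required links, candidate routes, and their costs) are hypothetical, and brute-force enumeration stands in for the LP relaxations, valid inequalities, and pricing machinery that an exact algorithm would use; it selects a minimum-cost subset of routes that services each required link exactly once.

```python
from itertools import combinations

# Toy arc-routing instance (hypothetical data): each candidate route is a
# pair (cost, frozenset of required links it services).
required = frozenset({"e1", "e2", "e3", "e4"})
routes = [
    (7.0, frozenset({"e1", "e2"})),
    (6.0, frozenset({"e3", "e4"})),
    (5.0, frozenset({"e1"})),
    (9.0, frozenset({"e2", "e3"})),
    (4.0, frozenset({"e4"})),
    (11.0, frozenset({"e1", "e2", "e3"})),
]

def set_partition_brute_force(required, routes):
    """Min-cost subset of routes servicing each required link exactly once.

    This is the combinatorial core of a set-partitioning (SP) model; exact
    algorithms replace this enumeration with bounds and column generation.
    """
    best_cost, best_sel = float("inf"), None
    for r in range(1, len(routes) + 1):
        for sel in combinations(range(len(routes)), r):
            covered = [routes[i][1] for i in sel]
            union = frozenset().union(*covered)
            # A partition: every required link covered, with no overlaps.
            if union == required and sum(map(len, covered)) == len(required):
                cost = sum(routes[i][0] for i in sel)
                if cost < best_cost:
                    best_cost, best_sel = cost, sel
    return best_cost, best_sel

cost, sel = set_partition_brute_force(required, routes)
print("optimal cost:", cost, "routes:", sel)  # expect cost 13.0, routes (0, 1)
```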
Companies increasingly communicate with their customers through instant messaging (IM) over the Internet, which lets service agents serve multiple customers simultaneously. In “Staffing and Control of Instant Messaging Contact Centers,” J. Luo and J. Zhang propose a new type of queueing model to capture the queueing dynamics of IM contact centers. The model combines the features of a many-server queue with those of a processor-sharing queue. Under a heavy-traffic regime, the number of servers increases while the service rate of each server remains unscaled but depends on the number of customers sharing that server. A fluid approximation involving a stochastic averaging principle is established to study the performance and optimal control of such systems.

In “Flexible Server Allocation and Customer Routing Policies for Two Parallel Queues When Service Rates Are Not Additive,” H.-S. Ahn and M. E. Lewis consider how routing and allocation can be coordinated to meet the challenge of demand variability in a parallel queueing system serving two types of customers. A decision maker decides whether to keep customers at the station where they arrived or to reroute them to the other station. At the same time, the decision maker has two servers and must decide where to allocate their effort. The authors analyze this joint decision-making scenario with two important twists: they allow the combined service rate (when the servers work at the same station) to be superadditive or subadditive, and they allow routing costs to be strictly positive. An optimal control policy under either the discounted or the long-run average cost criterion is discussed in detail.

When making trade-offs in decisions with multiple objectives, a decision maker must make numerous assessments unless some decomposition of the utility function is performed. Utility independence and one-switch independence conditions are powerful assumptions that work well for this purpose. When these conditions are not satisfied in a given problem, however, we are left to wonder what functional form of the utility function the decision maker should use. In “Utility Copula Functions Matching All Boundary Assessments,” A. E. Abbas proposes a method to construct utility surfaces that capture a wide range of trade-off assessments in multiattribute decision problems by direct utility elicitation, using univariate utility assessments at the boundary values. The approach also enables sensitivity analysis with respect to the widely used functional forms of utility functions.

Obtaining probabilistic judgments from subject-matter experts is difficult, especially when the experts are not quantitatively trained. In “Expert Elicitation of Adversary Preferences Using Ordinal Judgments,” C. Wang and V. M. Bier introduce a simple elicitation process in which intelligence experts rank order the attractiveness of a collection of possible terrorist targets (or attack strategies), where each target's attractiveness or utility to the terrorist(s) is assumed to involve multiple attributes. Probability distributions over the attribute weights are then derived mathematically using probabilistic inversion (PI) or Bayesian density estimation (BDE). This process reduces the burden involved in traditional methods of attribute-weight elicitation and explicitly captures the uncertainty and disagreement among experts. The work also makes broader methodological contributions to utility assessment and expert elicitation by using “unobserved attributes” to ensure PI feasibility, applying BDE to ordinal data in a rigorous manner, and elucidating the relationship between PI and BDE.
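As a rough illustration of the probabilistic-inversion step, the sketch below uses simple accept/reject sampling: attribute weights are drawn from a uniform prior and retained only when the implied utilities reproduce the experts' rank order. The targets, attribute scores, and ranking are hypothetical, and the linear additive utility is an assumption made for the example; the paper's PI and BDE procedures are more general.

```python
import random

# Hypothetical targets scored on two attributes, each scaled to [0, 1];
# assume a linear multiattribute utility u = w*a1 + (1 - w)*a2.
targets = {
    "A": (0.9, 0.2),
    "B": (0.5, 0.6),
    "C": (0.1, 0.9),
}
expert_ranking = ["A", "B", "C"]  # hypothetical ordinal judgment, best first

def consistent(w):
    """True if weight w reproduces the expert's rank order of utilities."""
    util = {t: w * a1 + (1 - w) * a2 for t, (a1, a2) in targets.items()}
    return sorted(targets, key=util.get, reverse=True) == expert_ranking

# Accept/reject probabilistic inversion: the retained samples form a
# distribution over the attribute weight consistent with the ranking
# (here, exactly the weights w > 0.5).
random.seed(0)
accepted = [w for w in (random.random() for _ in range(100_000))
            if consistent(w)]
print(f"accepted {len(accepted)} samples; "
      f"mean weight ~ {sum(accepted) / len(accepted):.3f}")
```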
Many combinatorial optimization problems have an objective function that can be viewed as being composed of more than one function. In “A General Framework for Designing Approximation Schemes for Combinatorial Optimization Problems with Many Objectives Combined into One,” S. Mittal and A. S. Schulz present a universal framework that yields algorithms returning near-optimal solutions for such problems. Examples of problems that can be tackled in their framework include the max-min resource-allocation problem, scheduling jobs on parallel machines with multiple resources to minimize makespan, assortment optimization problems that arise in revenue management, and problems in which the objective is the product of a fixed number of linear functions.

Observational, or nonrandomized, studies present difficulties for estimating treatment effects because of bias from confounding factors. A popular strategy for reducing bias is to match treated individuals to similar untreated, or control, individuals. The success of the matching procedure is typically assessed using a balance measure on the overall groups (treatment and control), rather than on the individual matches. In “Balance Optimization Subset Selection (BOSS): An Alternative Approach for Causal Inference with Observational Data,” A. G. Nikolaev, S. H. Jacobson, W. K. T. Cho, J. J. Sauppe, and E. C. Sewell propose a new model, balance optimization subset selection (BOSS), for directly identifying a treatment and control group possessing optimal balance. The resulting problem can be solved using optimization algorithms and analytics commonly used in operations research. Computational experiments demonstrate that BOSS can identify treatment and control groups with better balance than traditional matching methods.

In “A Linear Programming Approach to Nonstationary Infinite Horizon Markov Decision Processes,” A. Ghate and R. L. Smith obtain duality results, characterize extreme points as basic feasible solutions, and design a simplex algorithm for the infinite-dimensional linear program that is equivalent to Bellman's equations of optimality for nonstationary infinite horizon Markov decision processes (MDPs). This yields a nonstationary counterpart of the linear programming framework that has proven fruitful in studying stationary MDPs. It also provides an alternative to the planning horizon method, currently the only existing approach for solving nonstationary infinite horizon MDPs. Although the linear program involves an infinite amount of data, each pivot of the simplex method uses only a finite subset of these data, performs finitely many computations, and achieves enough improvement in the objective that the objective values of the resulting sequence of extreme points converge to the optimum.
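The flavor of the linear programming connection can be seen in the classical finite, stationary case, where the discounted value function solves an ordinary LP. The sketch below, with hypothetical transition and reward data and scipy.optimize.linprog as the solver, is this stationary analogue only, not the authors' infinite-dimensional program or simplex method.

```python
import numpy as np
from scipy.optimize import linprog

# A tiny stationary MDP with hypothetical data: 2 states, 2 actions.
# P[s, a, s2] is a transition probability; r[s, a] an immediate reward.
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 2.0],
              [0.5, 3.0]])
gamma = 0.9
nS, nA = r.shape

# Classical LP for the discounted value function:
#   minimize sum_s V(s)
#   s.t.     V(s) >= r(s,a) + gamma * sum_s' P(s,a,s') V(s')  for all (s,a).
A_ub, b_ub = [], []
for s in range(nS):
    for a in range(nA):
        A_ub.append(gamma * P[s, a] - np.eye(nS)[s])  # row . V <= -r(s,a)
        b_ub.append(-r[s, a])
res = linprog(c=np.ones(nS), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * nS)
print("LP value function:", res.x)

# Cross-check with value iteration on the same data.
V = np.zeros(nS)
for _ in range(2000):
    V = (r + gamma * (P @ V)).max(axis=1)
print("value iteration:  ", V)
```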
One of the most common enhancements to standard models of data envelopment analysis (DEA) is the specification of certain limits on the shadow prices of the inputs and outputs; these limits are referred to as weight restrictions. It is well known that the use of weight restrictions can result in an infeasible multiplier DEA model, but the exact nature of this phenomenon has not been fully explored. In “Weight Restrictions and Free Production in Data Envelopment Analysis,” V. V. Podinovski and T. Bouzdine-Chameeva prove that the infeasibility, along with some other lesser-known problems with DEA models, arises only if the weight restrictions induce either free or unlimited production in the underlying production technology. Interestingly, some of these problems may be hidden and not easily observed from the optimal solutions to standard DEA models. To address this phenomenon, the authors develop analytical and computational methods that detect free production in DEA models with weight restrictions.

Probabilistic set covering problems arise in applications such as facility location, vehicle routing, and crew scheduling. These problems combine the challenges of probabilistic and integer programming. Existing research on these problems has focused on the special case in which the uncertainties are independent. In “Probabilistic Set Covering with Correlations,” S. Ahmed and D. J. Papageorgiou develop deterministic integer programming reformulations of probabilistic set covering problems involving correlated uncertainties. Their approach builds on linear programming bounds on the probability of unions of events. By exploiting certain substructures of the problem, they develop strengthening procedures for these integer programming formulations. Computational results illustrate that the modeling approach can outperform formulations in which correlations are ignored and that the proposed algorithms can significantly reduce overall computation time.

Given a collection of sets, a hitting set contains at least one element of each given set (i.e., it “hits” each given set). Finding a hitting set of minimum cardinality is known as the hitting set problem. In “The Implicit Hitting Set Approach to Solve Combinatorial Optimization Problems with an Application to Multigenome Alignment,” E. Moreno-Centeno and R. M. Karp define implicit hitting set problems as combinatorial optimization problems that can be easily cast as hitting set problems in which the collection of sets to be hit is too large to list explicitly. In addition, they show that several classic combinatorial optimization problems are implicit hitting set problems and give an algorithm for solving all such problems. Finally, they show that this approach outperforms the best-known exact methods for solving multigenome alignment problems.
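The generic loop behind this approach has a compact form: repeatedly solve a minimum hitting set problem over the sets discovered so far, and query an oracle for a set that the current solution fails to hit. The sketch below illustrates the idea on a toy instance; the hidden collection, the brute-force subproblem solver, and the oracle are all hypothetical stand-ins for the application-specific components.

```python
from itertools import combinations

universe = list(range(1, 7))

# Hidden collection of sets to hit; in a real implicit hitting set problem
# this collection is too large to list and is accessible only via an oracle.
HIDDEN = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {1, 6}, {2, 5}]

def oracle(candidate):
    """Return some hidden set not hit by the candidate, or None if all hit."""
    for s in HIDDEN:
        if not (s & candidate):
            return s
    return None

def min_hitting_set(collection):
    """Brute-force minimum-cardinality hitting set of an explicit collection."""
    for k in range(len(universe) + 1):
        for cand in combinations(universe, k):
            cand = set(cand)
            if all(s & cand for s in collection):
                return cand
    return set(universe)

def implicit_hitting_set():
    explicit = []                    # sets discovered so far
    while True:
        h = min_hitting_set(explicit)
        missed = oracle(h)
        # h is optimal for a subcollection; if it also hits every hidden
        # set, it must be optimal for the full (implicit) collection.
        if missed is None:
            return h
        explicit.append(missed)      # grow the explicit collection, re-solve

print("minimum hitting set:", implicit_hitting_set())
```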
Existing models for efficiently making production lot-sizing and transportation decisions in serial supply chains with concave costs are limited to so-called nonspeculative cost functions, under which it is always economical to keep inventory upstream as long as possible. Although this nonspeculative cost structure can model some value-added flows in supply chains, it does not effectively model the impact of transportation or holding costs that change dramatically over time, or general economies of scale in transportation. In “Basis Paths and a Polynomial Algorithm for the Multi-stage Production-Capacitated Lot-Sizing Problem,” H.-C. Hwang, H.-S. Ahn, and P. Kaminsky develop a polynomial algorithm that can make optimal decisions with general concave cost functions. To do this, they introduce an approach that overcomes the limitations of previous dynamic programming approaches: unlike traditional algorithms that proceed unidirectionally in time, their approach moves both forward and backward in time and could potentially help solve previously intractable problems.

Orthogonal packing in two and more dimensions is a class of combinatorial problems modeling real-world packing, cutting, and scheduling operations. One of the basic problem settings is the orthogonal-packing feasibility problem (OPP), which asks whether a given set of items fits in a given container. This problem is hard: in two dimensions, exact solution is realistic only up to about 20 items. In “LP Bounds in an Interval-Graph Algorithm for Orthogonal-Packing Feasibility,” G. Belov and H. Rohling consider one-dimensional relaxations of the OPP. These relaxations can be efficiently tightened using partial information about a solution, for example, the overlaps of the items' projections. The interval-graph algorithm is an exact method for the OPP that operates on these overlap relations. Integrating one-dimensional bounds, tightened with the partial information available at the branching-tree nodes, stabilizes and speeds up the method.

Value-function-based approaches for solving two-stage pure stochastic integer programs with random right-hand sides have recently been proposed. Because they must store the value functions explicitly, these approaches are limited in the number of rows they can handle. In “On a Level-Set Characterization of the Value Function of an Integer Program and Its Application to Stochastic Programming,” A. C. Trapp, O. A. Prokopyev, and A. J. Schaefer address this limitation by exploiting a level-set characterization of the value function of a pure integer program with inequality constraints. They implement a global branch-and-bound approach that yields encouraging computational results on large instances.

Stochastic simulation can provide high-fidelity models of real or conceptual systems, but simulation is often a slow and clumsy tool for decision support. Metamodels can be fitted to simulation output and used as fast and accurate approximations for real-time decision making and system optimization. Stochastic kriging is a metamodeling framework for simulation that is flexible enough to provide good global approximations of a performance surface; however, its flexibility means that it may not enforce known response properties, such as monotonicity, or avoid bumpiness of the predicted surface. In “Enhancing Stochastic Kriging Metamodels with Gradient Estimators,” X. Chen, B. E. Ankenman, and B. L. Nelson improve stochastic kriging by exploiting estimators of the response-surface gradient. They show that enhanced stochastic kriging significantly improves response-surface prediction over stochastic kriging without gradient estimators. They also establish desirable properties of potential gradient estimators, focusing on the infinitesimal perturbation analysis and likelihood ratio/score function methods.
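To give a flavor of how gradient information can enter a kriging predictor, derivative observations can be folded into the covariance matrix through the kernel's analytic derivatives. The one-dimensional sketch below is deterministic, noiseless kriging with a squared-exponential kernel and hand-picked hyperparameters on a toy test function, a much simpler setting than the authors' stochastic kriging framework, which also models simulation noise.

```python
import numpy as np

# Squared-exponential kernel with fixed, hand-picked hyperparameters.
s2, ell = 1.0, 1.0
jitter = 1e-8  # small diagonal term for numerical stability

def k(a, b):     # cov(y(a_i), y(b_j))
    d = a[:, None] - b[None, :]
    return s2 * np.exp(-d**2 / (2 * ell**2))

def k_dy(a, b):  # cov(y(a_i), y'(b_j)): kernel derivative in 2nd argument
    d = a[:, None] - b[None, :]
    return s2 * (d / ell**2) * np.exp(-d**2 / (2 * ell**2))

def k_dd(a, b):  # cov(y'(a_i), y'(b_j)): mixed second kernel derivative
    d = a[:, None] - b[None, :]
    return (s2 / ell**2) * (1 - d**2 / ell**2) * np.exp(-d**2 / (2 * ell**2))

# Toy data: noiseless values and exact gradients of sin(x) at three points.
x = np.array([0.0, 2.0, 4.0])
y, g = np.sin(x), np.cos(x)

# Joint covariance of the stacked observation vector [values; gradients].
K = np.block([[k(x, x),      k_dy(x, x)],
              [k_dy(x, x).T, k_dd(x, x)]]) + jitter * np.eye(2 * len(x))
z = np.concatenate([y, g])

# Predict on a grid: cross-covariances of y(x*) with values and gradients.
xs = np.linspace(0.0, 4.0, 9)
ks = np.hstack([k(xs, x), k_dy(xs, x)])
pred_grad = ks @ np.linalg.solve(K, z)

# Plain (values-only) kriging for comparison.
Kv = k(x, x) + jitter * np.eye(len(x))
pred_plain = k(xs, x) @ np.linalg.solve(Kv, y)

for xi, pg, pp in zip(xs, pred_grad, pred_plain):
    print(f"x={xi:3.1f}  true={np.sin(xi):+.3f}  "
          f"with-gradients={pg:+.3f}  values-only={pp:+.3f}")
```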